Results 1 - 20 of 7,562
1.
Nat Commun ; 15(1): 3941, 2024 May 10.
Article En | MEDLINE | ID: mdl-38729937

A relevant question concerning inter-areal communication in the cortex is whether these interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires strong, long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
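The synergy/redundancy distinction in this abstract can be made concrete with a toy information-theoretic example (hypothetical binary variables, not the paper's continuous ECoG analysis): for Z = X XOR Y with independent fair bits, neither signal alone carries any information about Z, yet jointly they determine it completely, so the information is purely synergistic.

```python
import numpy as np
from collections import Counter
from itertools import product

def entropy(counts):
    # Shannon entropy (bits) of a distribution given by raw counts.
    p = np.array([c for c in counts if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def mi(pairs):
    # Mutual information (bits) between the two coordinates of `pairs`.
    joint = Counter(pairs)
    a = Counter(p[0] for p in pairs)
    b = Counter(p[1] for p in pairs)
    return entropy(a.values()) + entropy(b.values()) - entropy(joint.values())

# Z = X XOR Y over all equally likely (X, Y) combinations.
samples = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

i_xz = mi([(x, z) for x, y, z in samples])         # X alone says nothing about Z
i_yz = mi([(y, z) for x, y, z in samples])         # Y alone says nothing about Z
i_xy_z = mi([((x, y), z) for x, y, z in samples])  # jointly, X and Y determine Z

print(i_xz, i_yz, i_xy_z)  # 0.0 0.0 1.0
```

By contrast, setting Z equal to a value carried by both signals would make the information fully redundant; analyses like the paper's apply the same decomposition logic with estimators suited to continuous ERP and broadband signals.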


Acoustic Stimulation , Auditory Cortex , Callithrix , Electrocorticography , Animals , Auditory Cortex/physiology , Callithrix/physiology , Male , Female , Evoked Potentials/physiology , Frontal Lobe/physiology , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology , Brain Mapping/methods
2.
Nat Commun ; 15(1): 3093, 2024 Apr 10.
Article En | MEDLINE | ID: mdl-38600118

Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes, one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback, rather than a single mechanism. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle them in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges with distinct computational mechanisms during vocalization.


Auditory Cortex , Animals , Auditory Cortex/physiology , Neurons/physiology , Feedback, Sensory/physiology , Feedback , Callithrix/physiology , Vocalization, Animal/physiology , Auditory Perception/physiology , Acoustic Stimulation
3.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article En | MEDLINE | ID: mdl-38600132

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Auditory Cortex , Sound Localization , Visual Cortex , Visual Perception/physiology , Auditory Cortex/physiology , Neurons/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Acoustic Stimulation/methods
4.
PLoS Comput Biol ; 20(4): e1011975, 2024 Apr.
Article En | MEDLINE | ID: mdl-38669271

The brain produces diverse functions, from perceiving sounds to producing arm reaches, through the collective activity of populations of many neurons. Determining if and how the features of exogenous variables (e.g., sound frequency, reach angle) are reflected in population neural activity is important for understanding how the brain operates. Often, high-dimensional neural population activity is confined to low-dimensional latent spaces. However, many current methods fail to extract latent spaces that are clearly structured by exogenous variables. This has contributed to a debate about whether brains should be thought of as dynamical systems or representational systems. Here, we developed a new latent-process Bayesian regression framework, the orthogonal stochastic linear mixing model (OSLMM), which introduces an orthogonality constraint amongst time-varying mixture coefficients, and we provide Markov chain Monte Carlo inference procedures. We demonstrate superior performance of OSLMM on latent trajectory recovery in synthetic experiments and show superior computational efficiency and prediction performance on several real-world benchmark data sets. We primarily focus on demonstrating the utility of OSLMM in two neural data sets: µECoG recordings from rat auditory cortex during presentation of pure tones and multi-single-unit recordings from monkey motor cortex during complex arm reaching. We show that OSLMM achieves superior or comparable predictive accuracy of neural data and decoding of external variables (e.g., reach velocity). Most importantly, in both experimental contexts, we demonstrate that OSLMM latent trajectories directly reflect features of the sounds and reaches, demonstrating that neural dynamics are structured by neural representations. Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale biological time-series datasets.
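The intuition behind OSLMM's orthogonality constraint can be illustrated with a minimal sketch (this is not the authors' implementation; the dimensions, the QR-derived mixing matrix, and the noise level are illustrative assumptions): when the columns of the mixing matrix are orthonormal, latent trajectories can be recovered from the observed population activity by a simple projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time points, K latent processes, N neurons.
T, K, N = 200, 3, 50

# Smooth latent trajectories (stand-ins for stochastic-process latents).
t = np.linspace(0.0, 1.0, T)
latents = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(K)], axis=1)  # (T, K)

# Orthogonal mixing: the columns of W are orthonormal (W.T @ W = I),
# analogous to OSLMM's constraint on the mixing coefficients.
W = np.linalg.qr(rng.standard_normal((N, K)))[0]  # (N, K)

# Observed population activity: mixed latents plus observation noise.
observed = latents @ W.T + 0.1 * rng.standard_normal((T, N))  # (T, N)

# Orthonormality makes latent recovery a simple transpose projection.
recovered = observed @ W  # (T, K)

print(np.allclose(W.T @ W, np.eye(K), atol=1e-10))  # True
```

With correlated (non-orthogonal) mixing columns, the transpose projection would no longer invert the mixing, which is one motivation for imposing the constraint.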


Auditory Cortex , Bayes Theorem , Markov Chains , Models, Neurological , Neurons , Stochastic Processes , Animals , Rats , Auditory Cortex/physiology , Neurons/physiology , Computational Biology , Linear Models , Monte Carlo Method , Computer Simulation
5.
J Neurosci ; 44(19)2024 May 08.
Article En | MEDLINE | ID: mdl-38561224

Coordinated neuronal activity plays an important role in information processing and transmission in the brain. However, current research predominantly focuses on the properties and functions of neuronal coordination in hippocampal and cortical areas, leaving subcortical regions relatively unexplored. In this study, we use single-unit recordings in female Sprague Dawley rats to investigate the properties and functions of groups of neurons exhibiting coordinated activity in the auditory thalamus, the medial geniculate body (MGB). We reliably identify coordinated neuronal ensembles (cNEs), groups of neurons that fire synchronously, in the MGB, and show that they are neither false-positive detections nor by-products of slow-state oscillations in anesthetized animals. We demonstrate that cNEs in the MGB have enhanced information-encoding properties relative to individual neurons. Their neuronal composition is stable between spontaneous and evoked activity, suggesting limited stimulus-induced ensemble dynamics. These MGB cNE properties are similar to those observed in cNEs in the primary auditory cortex (A1), suggesting that ensembles serve as a ubiquitous mechanism for organizing local networks and play a fundamental role in sensory processing within the brain.


Acoustic Stimulation , Geniculate Bodies , Neurons , Rats, Sprague-Dawley , Animals , Female , Rats , Neurons/physiology , Geniculate Bodies/physiology , Acoustic Stimulation/methods , Auditory Pathways/physiology , Action Potentials/physiology , Auditory Cortex/physiology , Auditory Cortex/cytology , Thalamus/physiology , Thalamus/cytology , Evoked Potentials, Auditory/physiology
6.
Article En | MEDLINE | ID: mdl-38557630

There is widespread interest in, and concern about, the hypothesis that the auditory system mediates ultrasound neuromodulation. We addressed this problem by performing acoustic shear-wave simulations in the mouse skull and behavioral experiments in deaf mice. The simulation results showed that shear waves propagating along the skull did not reach sufficient acoustic pressure in the auditory cortex to modulate neurons. Behavioral experiments were subsequently performed to awaken anesthetized mice with ultrasound targeting the motor cortex or ventral tegmental area (VTA). Ultrasound stimulation of the target areas significantly increased arousal scores even in deaf mice, whereas removing the ultrasound coupling gel abolished the effect. Immunofluorescence staining also showed that ultrasound can modulate neurons in the target area, whereas activation of neurons in the auditory cortex required an intact auditory system. In summary, shear waves propagating along the skull cannot reach the auditory cortex and induce neuronal activation; ultrasound neuromodulation-induced arousal requires direct action on functionally relevant stimulation targets and does not depend on auditory system participation.


Skull , Animals , Mice , Skull/diagnostic imaging , Skull/physiology , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Ultrasonic Waves , Ventral Tegmental Area/physiology , Ventral Tegmental Area/diagnostic imaging , Ventral Tegmental Area/radiation effects , Mice, Inbred C57BL , Male
7.
J Physiol ; 602(8): 1733-1757, 2024 Apr.
Article En | MEDLINE | ID: mdl-38493320

Differentiating between auditory signals of various emotional significance plays a crucial role in an individual's ability to thrive in social interactions and to survive. Multiple approaches, including anatomical studies, electrophysiological investigations, imaging techniques, optogenetics and chemogenetics, have confirmed that the auditory cortex (AC) impacts fear-related behaviours driven by auditory stimuli by conveying auditory information to the lateral amygdala (LA) through long-range excitatory glutamatergic and GABAergic connections. In addition, the LA provides glutamatergic projections to the AC which are important for fear memory expression and are modified by associative fear learning. Here we test the hypothesis that the LA also sends long-range direct inhibitory inputs to the cortex. To address this fundamental question, we used anatomical and electrophysiological approaches that allowed us to directly assess the nature of GABAergic inputs from the LA to the AC in the mouse. Our findings demonstrate the existence of a long-range inhibitory pathway from the LA to the AC (LAC) via parvalbumin-expressing (LAC-Parv) and somatostatin-expressing (LAC-SOM) neurons. This research identifies distinct electrophysiological properties for genetically defined long-range GABAergic neurons involved in the communication between the LA and the cortex (LAC-Parv inhibitory projections → AC neurons; LAC-SOM inhibitory projections → AC neurons) within the lateral amygdala-cortical network. KEY POINTS: The mouse auditory cortex receives inputs from the lateral amygdala. Retrograde viral tracing techniques allowed us to identify two previously undescribed lateral amygdala to auditory cortex (LAC) GABAergic projecting neuron types. Extensive electrophysiological, morphological and anatomical characterization of LAC neurons is provided here, demonstrating key differences in the three populations. This study paves the way for a better understanding of the growing complexity of the cortico-amygdalo-cortical circuit.


Auditory Cortex , Mice , Animals , Auditory Cortex/physiology , Amygdala/physiology , GABAergic Neurons/physiology , Parvalbumins/metabolism
8.
Curr Biol ; 34(8): 1605-1620.e5, 2024 Apr 22.
Article En | MEDLINE | ID: mdl-38492568

Sound elicits rapid movements of muscles in the face, ears, and eyes that protect the body from injury and trigger brain-wide internal state changes. Here, we performed quantitative facial videography from mice resting atop a piezoelectric force plate and observed that broadband sounds elicited rapid and stereotyped facial twitches. Facial motion energy (FME) adjacent to the whisker array was 30 dB more sensitive than the acoustic startle reflex and offered greater inter-trial and inter-animal reliability than sound-evoked pupil dilations or movement of other facial and body regions. FME tracked the low-frequency envelope of broadband sounds, providing a means to study behavioral discrimination of complex auditory stimuli, such as speech phonemes in noise. Approximately 25% of layer 5-6 units in the auditory cortex (ACtx) exhibited firing rate changes during facial movements. However, FME facilitation during ACtx photoinhibition indicated that sound-evoked facial movements were mediated by a midbrain pathway and modulated by descending corticofugal input. FME and auditory brainstem response (ABR) thresholds were closely aligned after noise-induced sensorineural hearing loss, yet FME growth slopes were disproportionately steep at spared frequencies, reflecting a central plasticity that matched commensurate changes in ABR wave 4. Sound-evoked facial movements were also hypersensitive in Ptchd1 knockout mice, highlighting the use of FME for identifying sensory hyper-reactivity phenotypes after adult-onset hyperacusis and inherited deficiencies in autism risk genes. These findings present a sensitive and integrative measure of hearing while also highlighting that even low-intensity broadband sounds can elicit a complex mixture of auditory, motor, and reafferent somatosensory neural activity.


Hearing , Animals , Mice , Male , Hearing/physiology , Sound , Acoustic Stimulation , Female , Auditory Cortex/physiology , Mice, Inbred C57BL , Movement , Evoked Potentials, Auditory, Brain Stem
9.
Sci Rep ; 14(1): 7078, 2024 03 25.
Article En | MEDLINE | ID: mdl-38528192

The mouse auditory cortex is composed of six sub-fields: the primary auditory field (AI), secondary auditory field (AII), anterior auditory field (AAF), insular auditory field (IAF), ultrasonic field (UF), and dorsoposterior field (DP). Previous studies have examined thalamo-cortical connections in the mouse auditory system and shown that AI, AAF, and IAF receive inputs from the ventral division of the medial geniculate body (MGB). However, the thalamo-cortical connections of the nonprimary auditory cortex (AII, UF, and DP) remain unclear. In this study, we examined the locations of MGB neurons projecting to these three cortical sub-fields and asked whether the sub-fields receive inputs from different subsets of MGB neurons or from a common one. To map the distributions of projecting neurons in the MGB, retrograde tracers were injected into AII, UF, and DP after identifying these areas with optical imaging. Our results indicate that neurons in the ventral part of the dorsal MGB (MGd) and the ventral part of the ventral MGB (MGv) project to UF and AII with little overlap, whereas DP received projections only from the MGd. Interestingly, these three cortical areas received input from distinct parts of the MGd and MGv in an independent manner. Based on our findings, these three auditory cortical sub-fields in mice may process auditory information independently.


Auditory Cortex , Geniculate Bodies , Mice , Animals , Geniculate Bodies/physiology , Auditory Cortex/physiology , Neurons , Neurites , Auditory Pathways/physiology , Thalamus/physiology
10.
Cortex ; 174: 1-18, 2024 May.
Article En | MEDLINE | ID: mdl-38484435

Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses among a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians in perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise levels. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music compared to speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on the HIN domain being music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested by increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.


Auditory Cortex , Music , Speech Perception , Humans , Brain/physiology , Auditory Perception/physiology , Hearing , Auditory Cortex/physiology , Brain Mapping , Speech Perception/physiology , Pitch Perception/physiology , Acoustic Stimulation
11.
J Neurosci ; 44(17)2024 Apr 24.
Article En | MEDLINE | ID: mdl-38508715

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked but feedback (FB) type for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, auditory evoked response showed peaks at 37 and 90 ms and visual evoked response at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.


Acoustic Stimulation , Auditory Cortex , Evoked Potentials, Visual , Magnetoencephalography , Photic Stimulation , Humans , Auditory Cortex/physiology , Magnetoencephalography/methods , Female , Male , Adult , Photic Stimulation/methods , Evoked Potentials, Visual/physiology , Acoustic Stimulation/methods , Models, Neurological , Young Adult , Evoked Potentials, Auditory/physiology , Neurons/physiology , Brain Mapping/methods
12.
PLoS Biol ; 22(3): e3002534, 2024 Mar.
Article En | MEDLINE | ID: mdl-38466713

Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution), while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
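Neural tracking of speech, as discussed in the abstract above, is often quantified by correlating the recorded signal with the speech envelope across a range of time lags. A minimal simulated sketch (the sampling rate, lag, and noise level are invented for illustration; real analyses typically use regularized temporal response functions rather than raw lagged correlation):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100           # Hz; hypothetical sampling rate
n = fs * 60        # one minute of signal

# Simulated speech envelope (smoothed noise) and an "EEG" channel that
# tracks it at a 100 ms lag, buried in additive neural noise.
envelope = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
lag = int(0.1 * fs)
eeg = np.roll(envelope, lag) + 1.0 * rng.standard_normal(n)

# Quantify tracking: correlate envelope and EEG at candidate lags and
# pick the lag with the strongest correlation.
lags = range(int(0.3 * fs))
corrs = [np.corrcoef(envelope[: n - L], eeg[L:])[0, 1] for L in lags]
best = int(np.argmax(corrs))
print(best / fs)  # estimated response lag in seconds, near the simulated 0.1 s
```

Attention effects like those reported in the study would appear as a larger peak correlation for the attended stream than for the ignored one.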


Auditory Cortex , Speech Perception , Humans , Speech Perception/physiology , Speech , Feedback , Electroencephalography/methods , Auditory Cortex/physiology , Acoustic Stimulation/methods
13.
eNeuro ; 11(3)2024 Mar.
Article En | MEDLINE | ID: mdl-38467426

Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from the acoustic input. The zebra finch communicates in noisy environments. Neurons in its secondary auditory pallium (caudomedial nidopallium, NCM) can encode song embedded in background chorus, or auditory scenes, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (an aromatase inhibitor, FAD) and recorded extracellular NCM responses to songs and scenes in awake birds in vivo. We found that FAD affected neural encoding of songs by decreasing responsiveness and spike-timing reliability in inhibitory (narrow-spiking), but not in excitatory (broad-spiking), neurons. Congruently, FAD decreased neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injecting it bilaterally into NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in NCM did not affect performance. Thus, FAD in NCM impaired neuronal ASA but did not disrupt behavior, suggesting resilience or compensatory responses. Moreover, the impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.


Auditory Cortex , Finches , Female , Animals , Male , Finches/physiology , Aromatase , Reproducibility of Results , Vocalization, Animal/physiology , Acoustic Stimulation , Auditory Pathways/physiology , Auditory Perception/physiology , Auditory Cortex/physiology
14.
Hear Res ; 445: 108993, 2024 Apr.
Article En | MEDLINE | ID: mdl-38518392

Tinnitus is known to affect 10-15% of the population, severely impacting 1-2% of those afflicted. Tinnitus is generally a consequence of peripheral auditory damage resulting in maladaptive plastic changes in excitatory/inhibitory homeostasis at multiple levels of the central auditory pathway, as well as changes in diverse nonauditory structures. Animal studies of primary auditory cortex (A1) generally find tinnitus-related changes in excitability across A1 layers and differences between inhibitory neuronal subtypes. Changes due to sound exposure include changes in spontaneous activity, cross-columnar synchrony, bursting, and tonotopic organization. Few studies in A1 directly correlate tinnitus-related changes in neural activity with an individual animal's behavioral evidence of tinnitus. The present study used an established condition-suppression sound-exposure model of chronic tinnitus and recorded spontaneous and driven single-unit responses from A1 layers 5 and 6 of awake Long-Evans rats. A1 units recorded from animals with behavioral evidence of tinnitus showed significant increases in spontaneous and sound-evoked activity that correlated directly with the animal's tinnitus score. Significant increases in the number of bursting units, the number of bursts per minute, and burst duration were seen for A1 units recorded from animals with behavioral evidence of tinnitus. The present A1 findings support prior unit-recording studies in auditory thalamus and recent in vitro findings in this same animal model, and they are consistent with sensory cortical studies showing tinnitus- and neuropathic pain-related down-regulation of inhibition and increased excitation based on plastic neurotransmitter and potassium-channel changes. Reducing A1 deep-layer tinnitus-related hyperactivity is a potential target for tinnitus pharmacotherapy.


Auditory Cortex , Tinnitus , Rats , Animals , Auditory Cortex/physiology , Tinnitus/metabolism , Wakefulness , Rats, Long-Evans , Auditory Pathways/metabolism
15.
Hum Brain Mapp ; 45(2): e26572, 2024 Feb 01.
Article En | MEDLINE | ID: mdl-38339905

Tau rhythms are sound-responsive alpha-band (~8-13 Hz) oscillations generated largely within auditory areas of the superior temporal gyri. Studies of tau have mostly employed magnetoencephalography or intracranial recording because of tau's elusiveness in the electroencephalogram. Here, we demonstrate that independent component analysis (ICA) decomposition can be an effective way to identify tau sources and study tau source activities in EEG recordings. Subjects (N = 18) were passively exposed to complex acoustic stimuli while the EEG was recorded from 68 electrodes across the scalp. Each subject's data were split into 60 parallel processing pipelines entailing five levels of high-pass filtering (passbands of 0.1, 0.5, 1, 2, and 4 Hz), three levels of low-pass filtering (25, 50, and 100 Hz), and four different ICA algorithms (fastICA, infomax, adaptive mixture ICA [AMICA], and multi-model AMICA [mAMICA]). Tau-related independent component (IC) processes were identified in these data as being localized near the superior temporal gyri with a spectral peak in the 8-13 Hz alpha band. These "tau ICs" showed alpha suppression during sound presentations that was not seen for other commonly observed IC clusters with spectral peaks in the alpha range (e.g., those associated with somatomotor mu, and parietal or occipital alpha). The choice of analysis parameters affected the likelihood of obtaining tau ICs from an ICA decomposition: lower cutoff frequencies for high-pass filtering yielded significantly fewer subjects with a tau IC than more aggressive high-pass filtering, the fastICA algorithm performed the poorest in this regard, and mAMICA performed best. The best combination of filters and ICA model identified at least one tau IC in the data of ~94% of the sample. Altogether, the data reveal close similarities between tau EEG IC dynamics and tau dynamics observed in MEG and intracranial data. Use of relatively aggressive high-pass filters and mAMICA decomposition should allow researchers to identify and characterize tau rhythms in a majority of their subjects. We believe adopting the ICA decomposition approach to EEG analysis can increase the rate and range of discoveries related to auditory-responsive tau rhythms.
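The high-pass-then-ICA pipeline described above can be sketched on simulated data (channel count, source waveforms, and noise levels are invented; scikit-learn's FastICA stands in for the study's EEG-specific AMICA/infomax implementations): a slow drift is attenuated by the high-pass filter, and ICA isolates an alpha-band component analogous to a tau IC.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs = 250                      # Hz; hypothetical EEG sampling rate
t = np.arange(0, 20, 1 / fs)  # 20 s of data

# Two simulated sources: an amplitude-modulated 10 Hz alpha rhythm
# (tau-like) and a slow random-walk drift.
alpha = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))
drift = np.cumsum(rng.standard_normal(t.size)) / 50
sources = np.stack([alpha, drift], axis=1)

# Mix into 4 "electrodes" and add sensor noise.
mixing = rng.standard_normal((2, 4))
eeg = sources @ mixing + 0.1 * rng.standard_normal((t.size, 4))

# High-pass at 1 Hz before ICA (one of the study's passband choices).
b, a = butter(4, 1.0, btype="highpass", fs=fs)
eeg_hp = filtfilt(b, a, eeg, axis=0)

# Decompose into independent components.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg_hp)  # (n_samples, n_components)

# Identify the component with the strongest 8-13 Hz alpha-band power.
spectra = np.abs(np.fft.rfft(components, axis=0)) ** 2
freqs = np.fft.rfftfreq(components.shape[0], 1 / fs)
band = (freqs >= 8) & (freqs <= 13)
alpha_fraction = spectra[band].sum(axis=0) / spectra.sum(axis=0)
print(alpha_fraction)  # one component should be dominated by alpha power
```

On real data, the spectral criterion would be combined with source localization near the superior temporal gyri before labeling a component as a tau IC.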


Auditory Cortex , Brain Waves , Humans , Algorithms , Auditory Cortex/physiology , Magnetoencephalography
16.
Cell Rep ; 43(2): 113758, 2024 Feb 27.
Article En | MEDLINE | ID: mdl-38358887

Meaningful auditory memories are formed in adults when acoustic information is delivered to the auditory cortex during heightened states of attention, vigilance, or alertness, as mediated by neuromodulatory circuits. Here, we identify that, in awake mice, acoustic stimulation triggers auditory thalamocortical projections to release adenosine, which prevents cortical plasticity (i.e., selective expansion of neural representation of behaviorally relevant acoustic stimuli) and perceptual learning (i.e., experience-dependent improvement in frequency discrimination ability). This sound-evoked adenosine release (SEAR) becomes reduced within seconds when acoustic stimuli are tightly paired with the activation of neuromodulatory (cholinergic or dopaminergic) circuits or periods of attentive wakefulness. If thalamic adenosine production is enhanced, then SEAR elevates further, the neuromodulatory circuits are unable to sufficiently reduce SEAR, and associative cortical plasticity and perceptual learning are blocked. This suggests that transient low-adenosine periods triggered by neuromodulatory circuits permit associative cortical plasticity and auditory perceptual learning in adults to occur.


Auditory Cortex , Animals , Mice , Auditory Cortex/physiology , Adenosine , Learning/physiology , Acoustic Stimulation , Sound
17.
Cell Rep ; 43(3): 113864, 2024 Mar 26.
Article En | MEDLINE | ID: mdl-38421870

The neural mechanisms underlying novelty detection are not well understood, especially in relation to behavior. Here, we present single-unit responses from the primary auditory cortex (A1) from two monkeys trained to detect deviant tones amid repetitive ones. Results show that monkeys can detect deviant sounds, and there is a strong correlation between late neuronal responses (250-350 ms after deviant onset) and the monkeys' perceptual decisions. The magnitude and timing of both neuronal and behavioral responses are increased by larger frequency differences between the deviant and standard tones and by increasing the number of standard tones preceding the deviant. This suggests that A1 neurons encode novelty detection in behaving monkeys, influenced by stimulus relevance and expectations. This study provides evidence supporting aspects of predictive coding in the sensory cortex.


Auditory Cortex , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Auditory Cortex/physiology , Neurons/physiology
18.
Cereb Cortex ; 34(2)2024 01 31.
Article En | MEDLINE | ID: mdl-38367612

Consequences of perceptual training, such as improvements in discriminative ability, are highly stimulus- and task-specific. Therefore, most studies on auditory training-induced plasticity in the adult brain have focused on the sensory aspects, particularly on functional and structural effects in the auditory cortex. Yet auditory training often involves significant cognitive components beyond its auditory demands, and how it affects cognition-related brain regions, such as the hippocampus, remains unclear. Here, we found in female rats that auditory cue-based go/no-go training significantly improved memory-guided behaviors associated with the hippocampus. Long-term potentiation recorded in vivo in the hippocampus was also enhanced in trained rats compared with naïve rats. In parallel, the phosphorylation level of calcium/calmodulin-dependent protein kinase II and the expression of parvalbumin-positive interneurons in the hippocampus were both upregulated. These findings demonstrate that auditory training substantially remodels the processing and function of brain regions beyond the auditory system that are associated with task demands.


Auditory Cortex , Hippocampus , Rats , Female , Animals , Hippocampus/physiology , Brain , Long-Term Potentiation , Auditory Cortex/physiology
19.
J Neurosci ; 44(15)2024 Apr 10.
Article En | MEDLINE | ID: mdl-38388426

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale are hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.


Auditory Cortex , Speech Perception , Humans , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Speech Perception/physiology , Temporal Lobe , Magnetic Resonance Imaging , Attention/physiology , Auditory Perception/physiology , Acoustic Stimulation
20.
Sci Adv ; 10(7): eadk0010, 2024 Feb 16.
Article En | MEDLINE | ID: mdl-38363839

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.


Auditory Cortex , Music , Humans , Pitch Perception/physiology , Auditory Cortex/physiology , Brain/physiology , Language